
    Lower Complexity Bounds for Lifted Inference

    One of the big challenges in the development of probabilistic relational (or probabilistic logical) modeling and learning frameworks is the design of inference techniques that operate on the level of the abstract model representation language, rather than on the level of ground, propositional instances of the model. Numerous approaches for such "lifted inference" techniques have been proposed. While it has been demonstrated that these techniques lead to significantly more efficient inference on some specific models, only very recent and still quite restricted results show the feasibility of lifted inference on certain syntactically defined classes of models. Lower complexity bounds that imply limitations on the feasibility of lifted inference for more expressive model classes were established early on in (Jaeger 2000). However, it is not immediate that these results also apply to the type of modeling languages that currently receive the most attention, i.e., weighted, quantifier-free formulas. In this paper we extend these earlier results and show that, under the assumption that NETIME ≠ ETIME, there is no polynomial lifted inference algorithm for knowledge bases of weighted, quantifier- and function-free formulas. Further strengthening the earlier results, this is also shown to hold for approximate inference and for knowledge bases that do not contain the equality predicate. Comment: To appear in Theory and Practice of Logic Programming (TPLP).
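
    To make the theorem's target concrete, here is a minimal Python sketch, with clause and weight chosen by us for illustration, of what ground inference for a single weighted, quantifier- and function-free clause costs; this exponential enumeration is precisely what a polynomial lifted algorithm would have to avoid.

    from itertools import product
    import math

    # Toy illustration (not from the paper): one hypothetical weighted,
    # quantifier- and function-free clause,
    #     w : smokes(X) and friends(X, Y) -> smokes(Y)
    # Grounding over a domain of size n yields n + n^2 ground atoms, so
    # brute-force weighted model counting sums over 2^(n + n^2) worlds;
    # this is the blow-up a polynomial lifted algorithm would avoid.

    W = 3.0              # assumed clause weight (log-linear semantics)
    DOMAIN = ["a", "b"]  # n = 2 keeps brute force feasible: 2^6 = 64 worlds

    def ground_partition_function(domain):
        atoms = ([("smokes", x) for x in domain]
                 + [("friends", x, y) for x in domain for y in domain])
        total = 0.0
        for bits in product([False, True], repeat=len(atoms)):
            world = dict(zip(atoms, bits))
            satisfied = sum(
                1 for x in domain for y in domain
                if (not world[("smokes", x)]
                    or not world[("friends", x, y)]
                    or world[("smokes", y)])
            )
            total += math.exp(W * satisfied)
        return total

    print(ground_partition_function(DOMAIN))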

    Lifted graphical models: a survey

    Lifted graphical models provide a language for expressing dependencies between different types of entities, their attributes, and their diverse relations, as well as techniques for probabilistic reasoning in such multi-relational domains. In this survey, we review a general form for a lifted graphical model, the par-factor graph, and show how a number of existing statistical relational representations map to this formalism. We discuss inference algorithms, including lifted inference algorithms, that efficiently compute the answers to probabilistic queries over such models. We also review work on learning lifted graphical models from data. There is a growing need for statistical relational models (whether they go by that name or another), as we are inundated with data that mixes structured and unstructured content, with entities and relations extracted noisily from text, and with the need to reason effectively over this data. We hope that this synthesis of ideas from many different research groups will provide an accessible starting point for new researchers in this expanding field.
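
    As a rough illustration of the formalism, the following sketch encodes a par-factor with logical variables, a constraint, a list of atoms, and a potential; the class and method names are our own illustrative choices, not the survey's notation.

    from dataclasses import dataclass
    from itertools import product
    from typing import Callable, Dict, List, Tuple

    @dataclass
    class ParFactor:
        logvars: List[str]                        # e.g. ["X"]
        atoms: List[Tuple[str, Tuple[str, ...]]]  # e.g. [("smokes", ("X",))]
        potential: Callable[..., float]           # truth values -> weight
        constraint: Callable[[Dict[str, str]], bool] = lambda sub: True

        def groundings(self, domain):
            """Enumerate the ground factors this par-factor stands for."""
            for combo in product(domain, repeat=len(self.logvars)):
                sub = dict(zip(self.logvars, combo))
                if self.constraint(sub):
                    yield [(pred, tuple(sub.get(a, a) for a in args))
                           for pred, args in self.atoms]

    # One par-factor compactly represents |domain| ground factors:
    pf = ParFactor(logvars=["X"],
                   atoms=[("smokes", ("X",)), ("cancer", ("X",))],
                   potential=lambda s, c: 2.0 if (not s or c) else 1.0)
    for ground_factor in pf.groundings(["alice", "bob"]):
        print(ground_factor)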

    Lifted First-Order Probabilistic Inference

    There has been a long-standing division in AI between logical (symbolic) and probabilistic reasoning approaches. While probabilistic models deal well with the inherent uncertainty of many real-world domains, they operate at a mostly propositional level. Logic systems, on the other hand, can handle much richer representations, especially first-order ones. In the last two decades, many probabilistic algorithms accepting first-order specifications have been proposed, but in the inference stage they still operate mostly at the propositional level, where the rich and useful first-order structure is no longer explicit. In this thesis we present a framework for lifted inference on first-order models, that is, inference in which the main operations occur at the first-order level, without the need to propositionalize the model. We define the semantics of first-order probabilistic models, present an algorithm (FOVE) that performs lifted inference, and give detailed proofs of its correctness. Furthermore, we describe how to solve the Most Probable Explanation problem with a variant of FOVE, and present a new anytime probabilistic inference algorithm, ABVE, meant to generalize the ability of logical systems to process a model gradually and stop as soon as an answer is available.
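
    The following sketch, assuming a toy model of interchangeable individuals, illustrates the counting idea that makes such lifted inference possible; it is in the spirit of FOVE's counting step rather than a reproduction of the algorithm.

    from itertools import product
    from math import comb

    # For n interchangeable individuals with identical factors, a sum
    # over 2^n joint assignments collapses to a sum over n + 1 counts.

    def weight(k, n, phi):
        # k individuals true, n - k false, identical per-individual factor
        return phi(True) ** k * phi(False) ** (n - k)

    def ground_sum(n, phi):
        """Propositional: enumerate all 2^n assignments."""
        return sum(weight(sum(bits), n, phi)
                   for bits in product([0, 1], repeat=n))

    def lifted_sum(n, phi):
        """Lifted: one term per count k, weighted by C(n, k)."""
        return sum(comb(n, k) * weight(k, n, phi) for k in range(n + 1))

    phi = lambda v: 0.7 if v else 0.3
    print(ground_sum(10, phi), lifted_sum(10, phi))  # identical answers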

    Functional Subsumption in Feature Description Logic

    Most machine learning algorithms rely on examples represented propositionally as feature vectors. However, most data in real applications is structured, and is better described by sets of objects with attributes and relations between them. Typically, ad-hoc methods have been used to convert such data to feature vectors, in many cases taking a significant amount of the computation. Propositionalization becomes more principled if features are generated from structured data using a formal, domain-independent language that describes the feature types to be extracted. Such a language must limit its expressivity so that it remains useful while some inference procedures on it stay tractable. In this chapter we present Feature Description Logic (FDL), proposed by Cumby & Roth, in which feature extraction is viewed as an inference process (subsumption). We also present an extension to FDL that we call Functional FDL. FDL is ultimately based on unification of object attributes and relations between objects in order to detect feature types in examples. Functional subsumption provides further abstraction by using unification modulo a Boolean function representing similarity between attributes and relations. This greatly improves flexibility in practical situations by accommodating variations in attribute values and relation names, and by incorporating background knowledge (e.g., typos, number and gender, synonyms, etc.). We define the semantics of functional subsumption and show how to adapt the regular subsumption algorithm to implement it.
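
    The following sketch illustrates the core idea under simplified assumptions: subsumption is checked modulo a Boolean similarity function, here a toy synonym table over hypothetical attribute-value pairs; it shows the flavor of functional subsumption, not the FDL algorithm itself.

    SYNONYMS = {("car", "automobile"), ("colour", "color")}

    def similar(a, b):
        """Illustrative similarity test: exact match or known synonyms."""
        return a == b or (a, b) in SYNONYMS or (b, a) in SYNONYMS

    def subsumes(description, example):
        """Every (attribute, value) pair required by the description must
        match some pair in the example, up to `similar`."""
        return all(any(similar(dk, ek) and similar(dv, ev)
                       for ek, ev in example)
                   for dk, dv in description)

    desc = [("type", "car"), ("colour", "red")]
    ex = [("type", "automobile"), ("color", "red"), ("wheels", "4")]
    print(subsumes(desc, ex))  # True: matches modulo synonyms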

    First-order Probabilistic Inference Revisited

    Following ideas in Poole, which we correct, formalize and extend, this paper presents the first provably correct algorithm for reasoning with probabilistic first-order representations at the lifted level. Specifically, the algorithm automates the process of probabilistic reasoning about populations of individuals, their properties, and the relations between them, without the need to ground the probabilistic knowledge base. The algorithm uses unification to guide an interleaving of variable ordering and first-order variable elimination. Importantly, our contribution includes the formalization of the concepts necessary to reason about the algorithm's correctness, together with its correctness proof.
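
    The complementary inversion step of first-order variable elimination can be shown in a few lines; the sketch below assumes a single unary factor over interchangeable individuals and is illustrative only.

    from itertools import product
    from math import prod

    # When interchangeable individuals each contribute an identical local
    # factor, the eliminating sum distributes into one local sum raised
    # to the n-th power, avoiding the 2^n enumeration entirely.

    def eliminate_ground(n, phi):
        """Propositional elimination: sum over all 2^n joint assignments."""
        return sum(prod(phi(v) for v in bits)
                   for bits in product([False, True], repeat=n))

    def eliminate_lifted(n, phi):
        """Lifted (inverted) elimination: one local sum, exponentiated."""
        return (phi(False) + phi(True)) ** n

    phi = lambda v: 4.0 if v else 1.0
    print(eliminate_ground(8, phi), eliminate_lifted(8, phi))  # both 390625.0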